Distributed dual gradient methods and error bound conditions
Abstract
In this paper we propose distributed dual gradient algorithms for linearly constrained separable convex problems and analyze their rates of convergence under different assumptions. Under a strong convexity assumption on the primal objective function, we propose two distributed dual fast gradient schemes for which we prove sublinear rates of convergence for dual suboptimality, as well as for primal suboptimality and feasibility violation, measured either at an averaged primal sequence or at the last generated primal iterate. Under the additional assumption of Lipschitz continuity of the gradient of the primal objective function, we prove a global error bound property for the dual problem and then analyze a dual gradient scheme for which we derive global linear rates of convergence for both dual and primal suboptimality and for primal feasibility violation. We also provide numerical simulations on optimal power flow problems.
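To make the setup concrete, the following is a minimal sketch (not the paper's exact scheme) of a dual fast gradient method with primal averaging for a separable quadratic program with a linear coupling constraint. All problem data (Q, q, A, b) and the plain averaging rule are illustrative assumptions.

```python
import numpy as np

# Sketch of a dual fast gradient method for
#   min_x  sum_i 0.5 x_i'Q_i x_i + q_i'x_i   s.t.  sum_i A_i x_i = b,
# with each Q_i positive definite, so the primal objective is strongly convex
# and the dual function is differentiable with Lipschitz gradient.

rng = np.random.default_rng(0)
n_agents, n_i, m = 3, 4, 2
Q = [np.eye(n_i) * (1.0 + i) for i in range(n_agents)]   # strongly convex blocks
q = [rng.standard_normal(n_i) for _ in range(n_agents)]
A = [rng.standard_normal((m, n_i)) for _ in range(n_agents)]
b = rng.standard_normal(m)

def primal_solution(lam):
    # Each agent minimizes its Lagrangian term locally: x_i = -Q_i^{-1}(q_i + A_i' lam).
    return [np.linalg.solve(Q[i], -(q[i] + A[i].T @ lam)) for i in range(n_agents)]

def dual_gradient(lam):
    x = primal_solution(lam)
    return sum(A[i] @ x[i] for i in range(n_agents)) - b, x

# Lipschitz constant of the dual gradient: ||A||^2 / mu (mu = strong convexity modulus).
A_full = np.hstack(A)
mu = min(np.linalg.eigvalsh(Qi).min() for Qi in Q)
L_d = np.linalg.norm(A_full, 2) ** 2 / mu

lam, lam_prev, t = np.zeros(m), np.zeros(m), 1.0
K = 200
x_avg = [np.zeros(n_i) for _ in range(n_agents)]
for k in range(K):
    t_next = 0.5 * (1.0 + np.sqrt(1.0 + 4.0 * t ** 2))
    y = lam + ((t - 1.0) / t_next) * (lam - lam_prev)     # Nesterov momentum
    g, x = dual_gradient(y)
    lam_prev, lam = lam, y + g / L_d                      # ascent on the concave dual
    t = t_next
    x_avg = [xa + xi / K for xa, xi in zip(x_avg, x)]     # averaged primal sequence

print("feasibility violation:",
      np.linalg.norm(sum(A[i] @ x_avg[i] for i in range(n_agents)) - b))
```

The averaged primal sequence is the quantity whose suboptimality and feasibility violation the abstract refers to; the paper's schemes and rate guarantees are more refined than this plain average.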
Similar resources
New Analysis of Linear Convergence of Gradient-type Methods via Unifying Error Bound Conditions
The subject of linear convergence of gradient-type methods on non-strongly convex optimization has been widely studied by introducing several notions as sufficient conditions. Influential examples include the error bound property, the restricted strong convexity property, the quadratic growth property, and the Kurdyka-Łojasiewicz property. In this paper, we first define a group of error bound con...
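For reference, two of the conditions named above are commonly stated as follows (one typical form each; exact definitions vary across papers, so the constants κ and μ are placeholders):

```latex
% Error bound: distance to the solution set X* is controlled by the gradient norm.
% Quadratic growth: the objective grows at least quadratically away from X*.
\[
  \operatorname{dist}(x, X^\ast) \le \kappa\,\lVert \nabla f(x) \rVert,
  \qquad
  f(x) - f^\ast \ge \frac{\mu}{2}\,\operatorname{dist}(x, X^\ast)^2 .
\]
```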
An Accelerated Gradient Method for Distributed Multi-Agent Planning with Factored MDPs
We study optimization for collaborative multi-agent planning in factored Markov decision processes (MDPs) with shared resource constraints. Following past research, we derive a distributed planning algorithm for this setting based on Lagrangian relaxation: we optimize a convex dual function which maps a vector of resource prices to a bound on the achievable utility. Since the dual function is n...
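The dual-function viewpoint described above can be illustrated with a hedged toy example: agents choose local actions given resource prices, and a projected subgradient step moves the prices toward satisfying a shared capacity constraint. The simple action model below is an assumption for illustration, not the paper's factored-MDP planner.

```python
import numpy as np

# Dual decomposition toy: each agent picks one of a few actions with known
# utility and resource usage; prices are updated by projected subgradient
# ascent on the dual, where the subgradient is the capacity violation.

rng = np.random.default_rng(1)
n_agents, n_actions, n_resources = 4, 5, 2
utility = rng.uniform(0, 1, (n_agents, n_actions))
usage = rng.uniform(0, 1, (n_agents, n_actions, n_resources))
capacity = np.full(n_resources, 1.5)

prices = np.zeros(n_resources)
for k in range(500):
    # Each agent maximizes utility minus priced resource usage, independently.
    scores = utility - usage @ prices
    choice = scores.argmax(axis=1)
    total_usage = usage[np.arange(n_agents), choice].sum(axis=0)
    # Projected subgradient step with a diminishing step size.
    step = 1.0 / np.sqrt(k + 1)
    prices = np.maximum(0.0, prices + step * (total_usage - capacity))

print("prices:", prices)
print("usage:", total_usage, "capacity:", capacity)
```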
A Finite-Time Dual Method for Negotiation between Dynamical Systems
This work presents a distributed algorithm for online negotiations of an optimal control policy between dynamical systems. We consider a network of self-interested agents that must agree upon a common state within a specified finite-time. The proposed algorithm exploits the distributed structure of the corresponding dual problem and uses a “shrinking horizon” property to enforce the f...
Trading Computation for Communication: Distributed Stochastic Dual Coordinate Ascent
We present and study a distributed optimization algorithm that employs a stochastic dual coordinate ascent method. Stochastic dual coordinate ascent methods enjoy strong theoretical guarantees and often outperform stochastic gradient descent methods in optimizing regularized loss minimization problems, but they have received little study in a distributed framework. We ...
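As context for the snippet above, here is a minimal single-machine SDCA sketch for L2-regularized least squares, using the closed-form dual coordinate update for the squared loss; the paper's distributed variant is not reproduced here, and the data and regularization level are illustrative.

```python
import numpy as np

# SDCA for  min_w (1/n) sum_i 0.5 (w'x_i - y_i)^2 + (lam/2) ||w||^2.
# The primal iterate w is kept equal to X'alpha / (lam * n) throughout.

rng = np.random.default_rng(2)
n, d, lam = 200, 10, 0.1
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)

alpha = np.zeros(n)               # one dual variable per example
w = np.zeros(d)
for epoch in range(20):
    for i in rng.permutation(n):
        # Exact coordinate maximization of the dual for the squared loss.
        delta = (y[i] - X[i] @ w - alpha[i]) / (1.0 + X[i] @ X[i] / (lam * n))
        alpha[i] += delta
        w += delta * X[i] / (lam * n)

print("training objective:",
      0.5 * np.mean((X @ w - y) ** 2) + 0.5 * lam * w @ w)
```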
Distributed Stochastic Variance Reduced Gradient Methods and A Lower Bound for Communication Complexity
We study distributed optimization algorithms for minimizing the average of convex functions. The applications include empirical risk minimization problems in statistical machine learning where the datasets are large and have to be stored on different machines. We design a distributed stochastic variance reduced gradient algorithm that, under certain conditions on the condition number, simultane...
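For background, the following is a hedged single-machine SVRG sketch for a least-squares objective, showing the variance-reduced gradient built from a snapshot full gradient. The paper's distributed scheme and its communication analysis are not reproduced; the step size and epoch length are illustrative choices.

```python
import numpy as np

# SVRG for  min_w (1/n) sum_i 0.5 (w'x_i - y_i)^2.
# Inner updates use  g = grad_i(w) - grad_i(w_snap) + full_grad,
# which is unbiased and has vanishing variance as w approaches the optimum.

rng = np.random.default_rng(3)
n, d = 200, 10
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d)

def grad_i(w, i):
    return (X[i] @ w - y[i]) * X[i]

L_max = np.max(np.sum(X ** 2, axis=1))   # per-example smoothness for squared loss
eta, inner = 0.25 / L_max, 2 * n

w = np.zeros(d)
for epoch in range(30):
    w_snap = w.copy()
    full_grad = X.T @ (X @ w_snap - y) / n    # full gradient at the snapshot
    for _ in range(inner):
        i = rng.integers(n)
        g = grad_i(w, i) - grad_i(w_snap, i) + full_grad
        w -= eta * g

print("objective:", 0.5 * np.mean((X @ w - y) ** 2))
```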